Deep Neural Network Approximation Theory

Authors

Dennis Elbrächter, Dmytro Perekrestenko, Philipp Grohs, Helmut Bölcskei

Abstract

This paper develops fundamental limits of deep neural network learning by characterizing what is possible if no constraints are imposed on the learning algorithm and on the amount of training data. Concretely, we consider Kolmogorov-optimal approximation through deep neural networks, with the guiding theme being a relation between the complexity of the function (class) to be approximated and the complexity of the approximating network in terms of connectivity and memory requirements for storing the network topology and the associated quantized weights. The theory we develop establishes that deep networks are Kolmogorov-optimal approximants for markedly different function classes, such as unit balls in Besov spaces and modulation spaces. In addition, deep networks provide exponential approximation accuracy, i.e., the approximation error decays exponentially in the number of nonzero weights in the network, for the multiplication operation, polynomials, sinusoidal functions, and certain smooth functions. Moreover, this holds true even for one-dimensional oscillatory textures and the Weierstrass function, a fractal function, neither of which has any previously known methods achieving exponential accuracy. We also show that in the approximation of sufficiently smooth functions, finite-width deep networks require strictly smaller connectivity than finite-depth wide networks.
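The exponential accuracy for multiplication rests on a by-now classical ReLU construction (due to Yarotsky) that this line of work builds on: on [0, 1], composing a piecewise-linear "hat" function m times yields the piecewise-linear interpolant of x^2 on a grid of width 2^-m, so the error 4^-(m+1) decays exponentially in depth, and x*y then follows from the polarization identity x*y = ((x+y)^2 - (x-y)^2)/4. Below is a minimal NumPy sketch of that construction; the function names and the error check are ours, for illustration only.

```python
import numpy as np

def hat(x):
    # Hat function g(x) = 2x on [0, 1/2], 2(1 - x) on [1/2, 1], 0 elsewhere,
    # realized by a tiny ReLU net: g(x) = 2 relu(x) - 4 relu(x - 1/2) + 2 relu(x - 1).
    relu = lambda t: np.maximum(t, 0.0)
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

def square_approx(x, depth):
    """Depth-`depth` ReLU approximation of x**2 on [0, 1] (Yarotsky-style).

    f_m(x) = x - sum_{s=1}^{m} g^{(s)}(x) / 4**s, where g^{(s)} is the s-fold
    composition of the hat function; the max error is 4**-(m+1), i.e. it
    decays exponentially in the number of layers (and nonzero weights).
    """
    g = x.copy()
    out = x.copy()
    for s in range(1, depth + 1):
        g = hat(g)          # s-fold composition of the hat function
        out -= g / 4**s
    return out

x = np.linspace(0.0, 1.0, 10001)
for m in (2, 4, 6, 8):
    err = np.max(np.abs(square_approx(x, m) - x**2))
    print(f"depth {m}: max error = {err:.2e}  (bound 4^-(m+1) = {4.0**-(m+1):.2e})")
# Multiplication then follows via x*y = ((x+y)**2 - (x-y)**2) / 4.
```

The printed errors match the 4^-(m+1) bound exactly, since the truncated series is precisely the piecewise-linear interpolant of x^2 on the dyadic grid.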


Similar Articles

Deep Neural Network Approximation using Tensor Sketching

Deep neural networks are powerful learning models that achieve state-of-the-art performance on many computer vision, speech, and language processing tasks. In this paper, we study a fundamental question that arises when designing deep network architectures: given a target network architecture, can we design a "smaller" network architecture that "approximates" the operation of the target network?...


Deep Neural Network Capacity

In recent years, deep neural networks have exhibited powerful superiority at information discrimination in many computer vision applications. However, the capacity of deep neural network architectures is still a mystery to researchers. Intuitively, a neural network with larger capacity can always store more information and thereby improve the discrimination ability of the model. But the learnable paramet...


Deep Sequential Neural Network

Neural networks sequentially build high-level features through their successive layers. We propose here a new neural network model where each layer is associated with a set of candidate mappings. When an input is processed, at each layer one mapping among these candidates is selected according to a sequential decision process. The resulting model is structured according to a DAG-like architect...
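A minimal sketch of such a layer, under our own assumptions: PyTorch, with a hypothetical greedy argmax selector standing in for the paper's learned sequential decision process, and all class and parameter names ours.

```python
import torch
import torch.nn as nn

class CandidateLayer(nn.Module):
    """A layer holding several candidate mappings; a selector picks one
    candidate per input, so different inputs follow different paths
    through the resulting DAG of layers."""

    def __init__(self, dim: int, n_candidates: int = 3):
        super().__init__()
        self.candidates = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.Tanh())
             for _ in range(n_candidates)]
        )
        # Hypothetical selector: scores each candidate from the input.
        self.selector = nn.Linear(dim, n_candidates)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Hard greedy selection per input; the paper instead learns the
        # selection with a sequential decision process.
        idx = self.selector(x).argmax(dim=-1)                   # (batch,)
        outs = torch.stack([c(x) for c in self.candidates], 1)  # (batch, K, dim)
        return outs[torch.arange(x.size(0)), idx]               # (batch, dim)

net = nn.Sequential(CandidateLayer(8), CandidateLayer(8), nn.Linear(8, 2))
print(net(torch.randn(4, 8)).shape)  # torch.Size([4, 2])
```

Hard per-input selection is what makes the computation path input-dependent; a learned selection policy would replace the argmax in training.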


Approximation Theory and Neural Networks

In many practical situations, one needs to construct a model for an input/output process. For example, one is interested in the price of a stock five years from now. The rating industry description for the stock typically lists such indicators as the increase in the price over the last year, the last 5 years, 10 years, life of the stock, P/E ratio, and alpha and beta risk factors. The buyer is ...


Provable approximation properties for deep neural networks

We discuss approximation of functions using deep neural nets. Given a function f on a d-dimensional manifold Γ ⊂ R^m, we construct a sparsely-connected depth-4 neural network and bound its error in approximating f. The size of the network depends on the dimension and curvature of the manifold Γ, the complexity of f in terms of its wavelet description, and only weakly on the ambient dimension m. Es...



Journal

Journal title: IEEE Transactions on Information Theory

Year: 2021

ISSN: 0018-9448, 1557-9654

DOI: https://doi.org/10.1109/tit.2021.3062161